    Deep Reinforcement Learning with Importance Weighted A3C for QoE enhancement in Video Delivery Services

    Adaptive bitrate (ABR) algorithms adapt the video bitrate to the network conditions to improve the overall video quality of experience (QoE). Recently, reinforcement learning (RL) and asynchronous advantage actor-critic (A3C) methods have been used to generate ABR algorithms, and they have been shown to improve the overall QoE compared to fixed-rule ABR algorithms. However, a common issue in A3C methods is the lag between the behaviour policy and the target policy. As a result, the two policies are no longer synchronized, which leads to suboptimal updates. In this work, we present ALISA: an Actor-Learner Architecture with Importance Sampling for efficient learning in ABR algorithms. ALISA incorporates importance sampling weights to give more weight to relevant experience, addressing the lag issues in existing A3C methods. We present the design and implementation of ALISA, and compare its performance to state-of-the-art video rate adaptation algorithms, including the vanilla A3C implemented in the Pensieve framework and fixed-rule schedulers such as BB, BOLA, and RB. Our results show that ALISA achieves 25%-48% higher average QoE than Pensieve, and even larger gains over the fixed-rule schedulers.
    Comment: 10 pages, 9 figures. Conference: 24th IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)
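    The core correction ALISA describes, reweighting experience by the ratio between target and behaviour policies, can be illustrated with a small sketch. This is not the paper's implementation; the function name, the truncation cap `rho_max`, and the toy inputs are illustrative assumptions, with truncation shown only as one common way to bound the variance of importance ratios:

    ```python
    import numpy as np

    def is_weighted_advantages(logp_target, logp_behaviour, advantages, rho_max=1.0):
        """Scale per-sample advantages by truncated importance weights
        rho = pi_target(a|s) / pi_behaviour(a|s), so that experience
        collected under a lagging behaviour policy is reweighted toward
        the current target policy."""
        rho = np.exp(logp_target - logp_behaviour)  # importance ratio per sample
        rho = np.minimum(rho, rho_max)              # truncate to bound variance
        return rho * advantages                     # weighted gradient scale

    # Toy example: the behaviour policy lags the target policy.
    weighted = is_weighted_advantages(
        logp_target=np.log([0.5, 0.2]),
        logp_behaviour=np.log([0.4, 0.4]),
        advantages=np.array([1.0, -1.0]),
    )
    ```

    Here the first sample's ratio (1.25) is truncated to 1.0, while the second sample, now less likely under the target policy, is down-weighted by its ratio of 0.5.
    
    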

    PPO-ABR: Proximal Policy Optimization based Deep Reinforcement Learning for Adaptive BitRate streaming

    Providing a high Quality of Experience (QoE) for video streaming in 5G and beyond-5G (B5G) networks is challenging due to the dynamic nature of the underlying network conditions. Several Adaptive Bit Rate (ABR) algorithms have been developed to improve QoE, but most are designed around fixed rules and are unsuitable for a wide range of network conditions. Deep Reinforcement Learning (DRL) based Asynchronous Advantage Actor-Critic (A3C) methods have recently demonstrated an ability to generalise to diverse network conditions, but they still have limitations. One specific issue with A3C methods is the lag between each actor's behavior policy and the central learner's target policy. Consequently, suboptimal updates emerge when the behavior and target policies fall out of synchronization. In this paper, we address the problems faced by vanilla A3C by integrating an on-policy multi-agent DRL method into the existing video streaming framework. Specifically, we propose a novel system for ABR generation: Proximal Policy Optimization-based DRL for Adaptive Bit Rate streaming (PPO-ABR). Our method improves the overall video QoE by maximizing sample efficiency, using a clipped probability ratio between the new and old policies over multiple epochs of minibatch updates. Experiments on real network traces demonstrate that PPO-ABR outperforms state-of-the-art methods for different QoE variants.
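    The clipped probability ratio the abstract refers to is PPO's standard surrogate objective. A minimal sketch, assuming log-probabilities and advantages are already computed (function name and example values are illustrative, not from the paper):

    ```python
    import numpy as np

    def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
        """PPO clipped surrogate loss:
        L = -E[min(r * A, clip(r, 1-eps, 1+eps) * A)],
        where r = pi_new(a|s) / pi_old(a|s). Clipping removes the incentive
        to move the new policy far from the old one in a single update,
        which is what permits multiple epochs of minibatch reuse."""
        ratio = np.exp(logp_new - logp_old)                       # probability ratio r
        unclipped = ratio * advantages
        clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
        return -np.mean(np.minimum(unclipped, clipped))           # pessimistic bound

    # Toy batch of two samples.
    loss = ppo_clip_loss(
        logp_new=np.log([0.6, 0.3]),
        logp_old=np.log([0.5, 0.5]),
        advantages=np.array([2.0, -1.0]),
    )
    ```

    Taking the minimum of the clipped and unclipped terms yields a pessimistic (lower) bound on the improvement, so each of the repeated minibatch epochs stays within a trust region around the old policy.
    
    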